The ability to jointly learn from multiple modalities, such as text, audio, and visual data, is a defining feature of intelligent systems. While there have been promising advances in designing neural networks to harness multimodal data, the enormous success of data augmentation currently remains limited to single-modality tasks like image classification. Indeed, it is particularly difficult to augment each modality while preserving the overall semantic structure of the data; for example, a caption may no longer be a good description of an image after standard augmentations have been applied, such as translation. Moreover, it is challenging to specify reasonable transformations that are not tailored to a particular modality. In this paper, we introduce LeMDA, Learning Multimodal Data Augmentation, an easy-to-use method that automatically learns to jointly augment multimodal data in feature space, with no constraints on the identities of the modalities or the relationship between modalities. We show that LeMDA can (1) profoundly improve the performance of multimodal deep learning architectures, (2) apply to combinations of modalities that have not been previously considered, and (3) achieve state-of-the-art results on a wide range of applications comprising image, text, and tabular data.
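The core idea, augmenting jointly in feature space, can be sketched in a few lines. The PyTorch snippet below is a minimal illustration under assumptions of our own (an additive perturbation produced by a small MLP conditioned on all modality features); it is not LeMDA's exact augmentation network or training objective.

```python
# Minimal sketch of feature-space multimodal augmentation, in the spirit of
# LeMDA. Architecture and additive-perturbation design are illustrative
# assumptions, not the authors' exact method.
import torch
import torch.nn as nn

class FeatureAugmenter(nn.Module):
    """Proposes a joint perturbation over concatenated per-modality features."""
    def __init__(self, dims):  # dims: feature size of each modality
        super().__init__()
        total = sum(dims)
        self.dims = dims
        self.net = nn.Sequential(
            nn.Linear(total, total), nn.ReLU(), nn.Linear(total, total)
        )

    def forward(self, feats):  # feats: list of [batch, dim] tensors
        joint = torch.cat(feats, dim=-1)
        delta = self.net(joint)  # perturbation conditioned on all modalities
        return list(torch.split(joint + delta, self.dims, dim=-1))

# Usage: augment image/text/tabular features before the fusion head.
img, txt, tab = torch.randn(8, 512), torch.randn(8, 256), torch.randn(8, 32)
aug = FeatureAugmenter([512, 256, 32])
img_a, txt_a, tab_a = aug([img, txt, tab])
```

Because the perturbation is conditioned on the concatenation of all modalities, the augmentation of, say, the text feature can depend on the image feature, which is what allows a learned augmenter to respect the joint semantics that input-space transformations tend to break.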
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, 32% of participants stated that they did not have enough time for it, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Multimodal image-text models have shown remarkable performance in the past few years. However, evaluating their robustness against distribution shifts is crucial before adopting them in real-world applications. In this paper, we investigate the robustness of 9 popular open-source image-text models under common perturbations on five tasks (image-text retrieval, visual reasoning, visual entailment, image captioning, and text-to-image generation). In particular, we propose several new multimodal robustness benchmarks by applying 17 image perturbation and 16 text perturbation techniques on top of existing datasets. We observe that multimodal models are not robust to image and text perturbations, especially to image perturbations. Among the tested perturbation methods, character-level perturbations constitute the most severe distribution shift for text, and zoom blur is the most severe shift for image data. We also introduce two new robustness metrics (MMI and MOR) for proper evaluation of multimodal models. We hope our extensive study sheds light on new directions for the development of robust multimodal models.
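To make the two most damaging shifts concrete, here is a hedged re-implementation of a character-level text perturbation and an approximate zoom blur. These are generic versions written for illustration, not the benchmark's actual perturbation code.

```python
# Generic sketches of the two perturbation families the study finds most
# severe: character-level text edits and zoom blur for images.
import random
from PIL import Image

def char_swap(text: str, rate: float = 0.1) -> str:
    """Randomly swap adjacent characters as a character-level perturbation."""
    chars = list(text)
    for i in range(len(chars) - 1):
        if random.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def zoom_blur(img: Image.Image, factors=(1.0, 1.05, 1.1)) -> Image.Image:
    """Blend progressively zoomed center crops to approximate zoom blur."""
    w, h = img.size
    acc = None
    for f in factors:
        cw, ch = int(w / f), int(h / f)
        crop = img.crop(((w - cw) // 2, (h - ch) // 2,
                         (w + cw) // 2, (h + ch) // 2)).resize((w, h))
        acc = crop if acc is None else Image.blend(acc, crop, 0.5)
    return acc
```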
Models should be able to adapt to unseen data during test-time to avoid performance drops caused by inevitable distribution shifts in real-world deployment scenarios. In this work, we tackle the practical yet challenging test-time adaptation (TTA) problem, where a model adapts to the target domain without accessing the source data. We propose a simple recipe called \textit{Data-efficient Prompt Tuning} (DePT) with two key ingredients. First, DePT plugs visual prompts into the vision Transformer and only tunes these source-initialized prompts during adaptation. We find such parameter-efficient finetuning can efficiently adapt the model representation to the target domain without overfitting to the noise in the learning objective. Second, DePT bootstraps the source representation to the target domain by memory bank-based online pseudo-labeling. A hierarchical self-supervised regularization specially designed for prompts is jointly optimized to alleviate error accumulation during self-training. With much fewer tunable parameters, DePT demonstrates not only state-of-the-art performance on the major adaptation benchmarks VisDA-C, ImageNet-C, and DomainNet-126, but also superior data efficiency, i.e., adaptation with only 1\% or 10\% of the data incurs little performance degradation compared to 100\% data. In addition, DePT can be flexibly extended to online or multi-source TTA settings.
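The first ingredient can be sketched as follows, assuming a generic Transformer backbone that consumes a token sequence; the module layout is an illustrative assumption, not DePT's exact design.

```python
# Minimal sketch of the prompt-tuning ingredient: learnable prompt tokens are
# prepended to the patch sequence of a frozen ViT, and only the prompts
# receive gradients during adaptation.
import torch
import torch.nn as nn

class PromptedViT(nn.Module):
    def __init__(self, vit: nn.Module, embed_dim: int, n_prompts: int = 8):
        super().__init__()
        self.vit = vit
        for p in self.vit.parameters():
            p.requires_grad = False  # backbone stays frozen
        self.prompts = nn.Parameter(torch.zeros(1, n_prompts, embed_dim))
        nn.init.trunc_normal_(self.prompts, std=0.02)

    def forward(self, patch_tokens):  # patch_tokens: [batch, seq, embed_dim]
        b = patch_tokens.size(0)
        tokens = torch.cat([self.prompts.expand(b, -1, -1), patch_tokens], 1)
        return self.vit(tokens)

# During TTA, the optimizer would see only the prompt parameters, e.g.:
# optim = torch.optim.AdamW([model.prompts], lr=1e-3)
```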
Hyperspectral imaging (HSI) records visual information over a wide range of spectral wavelengths. A representative hyperspectral image acquisition procedure conducts a 3D-to-2D encoding with a coded aperture snapshot spectral imager (CASSI) and requires a software decoder for 3D signal reconstruction. Based on this encoding procedure, two major challenges stand in the way of high-fidelity reconstruction: (i) to obtain the 2D measurement, CASSI shifts the spectral channels with a disperser and squeezes them onto the same spatial region, entangling the information across channels; (ii) the physically coded aperture (mask) leads to masked data loss by selectively blocking the pixel-wise light exposure. To tackle these challenges, we propose a spatial-spectral (S2-)Transformer architecture with a mask-aware learning strategy. First, we simultaneously leverage spatial and spectral attention modeling to disentangle the blended information in the 2D measurement along both dimensions. A series of Transformer structures across spatial and spectral cues is systematically designed, taking the information inter-dependency between the two kinds of cues into account. Second, masked pixels induce higher prediction difficulty and should be treated differently from unmasked ones. We therefore adaptively scale the loss penalty attributed to the mask structure by inferring the pixel-wise difficulty level for mask-aware prediction. Our proposed method not only sets a new quantitative state of the art but also yields better perceptual quality in structured regions.
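The mask-aware loss idea can be illustrated with a short sketch: reconstruction error at pixels blocked by the coded aperture is up-weighted, on the assumption that they are harder to predict. The concrete weighting rule below is our own simplification, not the paper's inferred difficulty levels.

```python
# Illustrative mask-aware reconstruction loss: errors at mask-blocked pixels
# are penalized more heavily than at exposed pixels.
import torch

def mask_aware_l1(pred, target, mask, hard_weight: float = 2.0):
    """pred/target: [B, C, H, W] HSI cubes; mask: [H, W], 1 = exposed pixel."""
    weight = torch.where(mask > 0,
                         torch.ones_like(mask),
                         torch.full_like(mask, hard_weight))
    return (weight * (pred - target).abs()).mean()
```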
Semantic localization (SeLo) refers to the task of obtaining the most relevant locations in large-scale remote sensing (RS) images using semantic information such as text. As an emerging task based on cross-modal retrieval, SeLo achieves semantic-level retrieval with only caption-level annotation, which demonstrates its great potential for unifying downstream tasks. Although SeLo has been carried out successively, this emerging direction has so far not been systematically explored and analyzed. In this paper, we thoroughly study this field and provide a complete benchmark, in terms of metrics and test data, to advance the SeLo task. First, based on the characteristics of this task, we propose multiple discriminative evaluation metrics to quantify SeLo performance. The devised significant area proportion, attention shift distance, and discrete attention distance are utilized to evaluate the generated SeLo maps at the pixel level and the region level. Next, to provide standard evaluation data for the SeLo task, we contribute a diverse, multi-semantic, multi-objective Semantic Localization Testset (AIR-SLT). AIR-SLT consists of 22 large-scale RS images and 59 test cases with different semantics, aiming to provide a comprehensive evaluation of retrieval models. Finally, we analyze the SeLo performance of RS cross-modal retrieval models in detail, explore the impact of different variables on this task, and provide a complete benchmark for the SeLo task. We also establish a new paradigm for RS referring expression comprehension and demonstrate the great advantage of SeLo in semantics by combining it with tasks such as detection and road extraction. The proposed evaluation metrics, semantic localization test set, and corresponding scripts are available at github.com/xiaoyuan1996/semanticlocalizationmetrics.
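As a rough illustration of what a pixel-level SeLo metric measures, the sketch below computes the share of attention mass that falls inside the annotated region. The exact definition of the significant area proportion in the paper may differ, so treat the function as hypothetical.

```python
# Hypothetical pixel-level SeLo metric: fraction of the attention map's mass
# that lies inside the ground-truth region.
import numpy as np

def salient_area_proportion(attn: np.ndarray, region: np.ndarray) -> float:
    """attn: nonnegative [H, W] heat map; region: boolean [H, W] ground truth."""
    total = attn.sum()
    return float((attn * region).sum() / total) if total > 0 else 0.0
```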
3D scene photorealistic stylization aims to generate photorealistic images from arbitrary novel views according to a given style image, while ensuring consistency when rendering from different viewpoints. Some existing stylization methods based on neural radiance fields can effectively predict stylized scenes by combining the features of the style image with the multi-view images used to train the 3D scene. However, these methods generate novel-view images that contain objectionable artifacts. Moreover, they cannot achieve universal photorealistic stylization of a 3D scene: the 3D scene representation network must be retrained on the neural radiance field for each new style image. We propose a novel 3D scene photorealistic style transfer framework to address these issues. It achieves photorealistic 3D scene style transfer from a 2D style image. We first pre-train a 2D photorealistic style transfer network, which can perform photorealistic style transfer between any given content image and style image. We then use voxel features to optimize the 3D scene and obtain a geometric representation of the scene. Finally, we jointly optimize a hyper network to achieve photorealistic style transfer of the scene for arbitrary style images. In the transfer stage, we use the pre-trained 2D photorealistic network to constrain the photorealistic style across different views and different style images in the 3D scene. Experimental results show that our method not only achieves 3D photorealistic style transfer for arbitrary style images but also outperforms existing methods in terms of visual quality and consistency. Project page: https://semchan.github.io/upst_nerf.
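One way to picture the hyper network step is the following sketch, where a style embedding modulates the color branch of the scene representation with a learned scale and shift. Layer sizes and the modulation scheme are assumptions for illustration, not the paper's architecture.

```python
# Illustrative hyper-network head: a style embedding (e.g., pooled features
# of the style image) produces per-channel modulation of the color branch,
# so one trained model can render the scene under arbitrary styles.
import torch
import torch.nn as nn

class StyleHyperHead(nn.Module):
    def __init__(self, style_dim: int = 256, feat_dim: int = 128):
        super().__init__()
        self.to_mod = nn.Linear(style_dim, 2 * feat_dim)  # scale and shift
        self.rgb = nn.Linear(feat_dim, 3)

    def forward(self, scene_feat, style_emb):
        scale, shift = self.to_mod(style_emb).chunk(2, dim=-1)
        return torch.sigmoid(self.rgb(scene_feat * (1 + scale) + shift))
```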
Deep learning methodology has contributed greatly to the development of the hyperspectral image (HSI) analysis community. However, it also makes HSI analysis systems vulnerable to adversarial attacks. To this end, we propose a masked spatial-spectral autoencoder (MSSA) in this paper, grounded in self-supervised learning theory, to enhance the robustness of HSI analysis systems. First, a masked sequence attention learning module is designed to promote the inherent robustness of HSI analysis systems along the spectral channel. Then, we develop a graph convolutional network with a learnable graph structure to establish global pixel-wise combinations; in this way, the attack effect can be dispersed across all related pixels in each combination, achieving better defense performance in the spatial aspect. Finally, to improve the defense capability and address the problem of limited labeled samples, MSSA adopts spectral reconstruction as a pretext task and fits the datasets in a self-supervised manner. Comprehensive comparisons against state-of-the-art hyperspectral classification methods and representative adversarial defense strategies validate the proposed approach.
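The spectral-reconstruction pretext task can be sketched as a tiny masked autoencoder over per-pixel spectra; the architecture and mask ratio below are illustrative assumptions rather than MSSA's actual modules.

```python
# Minimal masked spectral autoencoder sketch: random spectral bands of each
# pixel are masked, and the model is trained to recover them, encouraging
# robustness along the spectral dimension.
import torch
import torch.nn as nn

class SpectralMAE(nn.Module):
    def __init__(self, n_bands: int, hidden: int = 64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_bands, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, n_bands)

    def forward(self, x, mask_ratio: float = 0.5):  # x: [batch, n_bands]
        mask = (torch.rand_like(x) > mask_ratio).float()  # 1 = band kept
        recon = self.dec(self.enc(x * mask))
        loss = ((recon - x) ** 2 * (1 - mask)).mean()  # loss on masked bands
        return loss, recon
```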
To deploy deep models in a computationally efficient manner, model quantization approaches are frequently used. In addition, as new hardware supporting mixed-bitwidth arithmetic operations emerges, recent research on mixed-precision quantization (MPQ) has begun to fully exploit the representation capacity by searching for optimized bit-widths for the different layers and modules in a network. However, previous studies mostly search for the MPQ strategy in a costly scheme using reinforcement learning, neural architecture search, etc., or simply utilize partial prior knowledge for bit-width assignment, which may be biased and sub-optimal. In this work, we present a novel stochastic differentiable quantization (SDQ) method that can automatically learn the MPQ strategy in a more flexible and globally optimized space, with a smoother gradient approximation. In particular, differentiable bit-width parameters (DBPs) are employed as probability factors for stochastic quantization between adjacent bit-width choices. After acquiring the optimal MPQ strategy, we further train the network with entropy-aware bin regularization and knowledge distillation. We extensively evaluate our method on multiple networks, across different hardware (GPUs and FPGAs) and datasets. SDQ outperforms all state-of-the-art mixed- or single-precision quantization methods at lower bit-widths, and even performs better than the full-precision counterparts for various ResNet and MobileNet families, demonstrating the effectiveness and superiority of our method.
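The role of DBPs can be illustrated with a minimal sketch: a learnable logit gives the probability of quantizing a weight tensor at the higher of two adjacent bit-widths, sampled with a straight-through estimator so the bit-width parameter receives gradients. The quantizer and the two-choice parameterization are simplifications of ours, not SDQ's exact formulation.

```python
# Sketch of stochastic bit-width selection with a differentiable bit-width
# parameter (DBP) and a straight-through estimator.
import torch
import torch.nn as nn

def fake_quant(x, bits: int):
    """Uniform symmetric fake quantization with a straight-through round."""
    qmax = 2 ** (bits - 1) - 1
    scale = x.abs().max().clamp(min=1e-8) / qmax
    q = (x / scale).clamp(-qmax, qmax)
    q = q + (torch.round(q) - q).detach()  # STE for the rounding step
    return q * scale

class StochasticBitSelect(nn.Module):
    def __init__(self, low_bits: int = 4, high_bits: int = 8):
        super().__init__()
        self.low, self.high = low_bits, high_bits
        self.dbp = nn.Parameter(torch.zeros(1))  # logit of P(high bit-width)

    def forward(self, w):
        p = torch.sigmoid(self.dbp)
        sample = (torch.rand(1, device=w.device) < p).float()
        gate = sample + p - p.detach()  # straight-through: gradient reaches p
        return gate * fake_quant(w, self.high) + (1 - gate) * fake_quant(w, self.low)
```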
Neural architecture search (NAS) algorithms save tremendous labor of human experts, and recent advances have further reduced the computational overhead to an affordable level. However, it is still cumbersome to deploy NAS techniques in real-world applications due to the fussy procedure and the supervised learning paradigm. In this work, we propose self-supervised and weight-preserving neural architecture search (SSWP-NAS) as an extension of the current NAS framework, by allowing self-supervision and retaining the concomitant weights discovered during the search phase. As such, we simplify the workflow of NAS to a single-stage, proxy-free procedure. Experiments show that the architectures searched by the proposed framework achieve state-of-the-art accuracy on the CIFAR-10, CIFAR-100, and ImageNet datasets without using manual labels. Moreover, we show that using the concomitant weights as initialization consistently outperforms random initialization and the two-stage weight pre-training method by a clear margin under semi-supervised learning scenarios. Code is publicly available at https://github.com/lzvv123456/sswp-nas.
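The weight-preserving step amounts to initializing the stand-alone model from the parameters found during search instead of re-initializing it randomly; a minimal sketch, assuming the searched sub-network and the final model share parameter names, is given below.

```python
# Illustrative weight-preserving initialization: copy search-phase weights
# whose names and shapes match into the stand-alone model. The name-matching
# scheme is an assumption for illustration.
import torch
import torch.nn as nn

def init_from_search(model: nn.Module, supernet_state: dict) -> int:
    """Copy matching search-phase weights into `model`; return the count."""
    own = model.state_dict()
    copied = 0
    for name, tensor in supernet_state.items():
        if name in own and own[name].shape == tensor.shape:
            own[name].copy_(tensor)
            copied += 1
    model.load_state_dict(own)
    return copied
```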